Generating Facial Expressions for Speech

Authors

  • Catherine Pelachaud
  • Norman I. Badler
  • Mark Steedman
Abstract

This paper reports results from a program that produces high quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures, as to produce convincing animation. Towards this end we have produced a high level programming language for 3D animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: this includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as “focus”, “topic” and “comment”, “theme” and “rheme”, or “given” and “new” information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule-governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect and facial expressions/affect. A meaning representation includes discourse information: what is contrastive/background information in the given context, and what is the “topic” or “theme” of the discourse. The system maps the meaning representation into how accents and their placement are chosen, how they are conveyed over facial expression and how speech and facial expressions are coordinated. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators or manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye and head movements. The lowest level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models.

Disciplines: Computer Sciences | Engineering | Graphics and Human Computer Interfaces

This journal article is available at ScholarlyCommons: http://repository.upenn.edu/hms/192
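To make the pipeline described in the abstract concrete, the sketch below shows, in Python, one way a rule-governed mapping from a discourse-annotated utterance to FACS action units could look. It is an illustration only: the word-level tags, the choice of action units (AU1+AU2 brow raise as a conversational signal, AU45 blink as a punctuator), and the timings are assumptions made for the example, not the rules defined in the paper.

```python
# Hypothetical sketch of a rule-governed mapping from a discourse-annotated
# utterance to FACS action units, in the spirit of the pipeline the abstract
# describes. All rule choices, AU numbers, and timings are illustrative
# assumptions, not the paper's actual rules.

from dataclasses import dataclass


@dataclass
class Word:
    text: str
    start: float              # seconds, from the speech synthesizer
    end: float
    new_info: bool = False    # "new" vs. "given" information
    focus: bool = False       # contrastive focus
    phrase_final: bool = False


@dataclass
class FacsEvent:
    action_units: list        # e.g. [1, 2] = inner + outer brow raiser
    start: float
    end: float
    role: str                 # "conversational signal", "punctuator", ...


def facial_signals(words):
    """Map information structure to facial signals, synchronized with speech."""
    events = []
    for w in words:
        # Conversational signal: raise the eyebrows (AU1 + AU2) on words
        # carrying a pitch accent, i.e. new or focused information.
        if w.new_info or w.focus:
            events.append(FacsEvent([1, 2], w.start, w.end, "conversational signal"))
        # Punctuator: a blink (AU45) at intonational phrase boundaries.
        if w.phrase_final:
            events.append(FacsEvent([45], w.end, w.end + 0.15, "punctuator"))
    return events


if __name__ == "__main__":
    utterance = [
        Word("I", 0.00, 0.15),
        Word("said", 0.15, 0.45),
        Word("GREEN", 0.45, 0.90, new_info=True, focus=True),
        Word("shirt", 0.90, 1.30, phrase_final=True),
    ]
    for ev in facial_signals(utterance):
        print(ev)
```

In the full system described above, such a stream of action units would additionally pass through lip-shape and coarticulation rules before driving the face; keeping FACS as the lowest-level representation is what makes the generator portable across facial models.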

Similar Articles

Sight and sound: generating facial expressions and spoken intonation from context

This paper presents an implemented system for automatically producing prosodically appropriate speech and corresponding facial expressions for animated, three-dimensional agents that respond to simple database queries. Unlike previous text-to-facial animation approaches, the system described here produces synthesized speech and facial animations entirely from scratch, starting with semantic rep...

WinkTalk: a demonstration of a multimodal speech synthesis platform linking facial expressions to expressive synthetic voices

This paper describes a demonstration of the WinkTalk system, which is a speech synthesis platform using expressive synthetic voices. With the help of a webcam and facial expression analysis, the system allows the user to control the expressive features of the synthetic speech for a particular utterance with their facial expressions. Based on a personalised mapping between three expressive sy...

Realistic Speech Animation of Synthetic Faces

In this study, we combined physically-based modeling and parameterization to generate realistic speech animation on synthetic faces. We used physically-based modeling for muscles. Muscles are modeled as forces deforming the mesh of polygons. Parameterization technique is used for generating mouth shapes for speech animation. Each meaningful part of a text, which is a letter in our case, correspon...
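As a rough illustration of "muscles modeled as forces deforming the mesh of polygons", the sketch below displaces mesh vertices toward a muscle attachment point with a linear distance falloff. The falloff shape, gain, and parameters are assumptions made for the example, not the muscle model used in that study.

```python
# Minimal sketch: a single linear muscle pulls nearby vertices toward its
# attachment point, with the pull falling off with distance from the
# insertion. Illustrative only; parameters are arbitrary.

import numpy as np


def apply_muscle(vertices, attachment, insertion, contraction, radius):
    """Displace mesh vertices under a simple linear muscle force.

    vertices    : (N, 3) array of mesh vertex positions
    attachment  : (3,) fixed end of the muscle (e.g. on the skull)
    insertion   : (3,) end embedded in the skin near the mouth
    contraction : scalar in [0, 1], how strongly the muscle contracts
    radius      : vertices farther than this from the insertion are unaffected
    """
    out = vertices.copy()
    dist = np.linalg.norm(vertices - insertion, axis=1)
    influence = np.clip(1.0 - dist / radius, 0.0, 1.0)    # linear falloff
    pull = attachment - vertices                           # force direction
    out += contraction * influence[:, None] * pull * 0.3   # 0.3: arbitrary gain
    return out
```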

Smile Analyzer: A Software Package for Analyzing the Characteristics of the Speech and Smile

Taking into account the factors related to lip-tooth relationships in orthodontic diagnosis and treatment planning is of prime importance. Manual quantitative analysis of facial parameters on photographs during smile and speech is a difficult and time-consuming job. Since there is no comprehensive and user-friendly software package, we developed a software program called "Smile Analyzer" in the...

Virtual Emotion to Expression: A Comprehensive Dynamic Emotion Model to Facial Expression Generation using the MPEG-4 Standard

In this paper we present a framework for generating dynamic facial expressions synchronized with speech, rendered using a tridimensional realistic face. Dynamic facial expressions are those temporal-based facial expressions semantically related to emotions, speech and affective inputs that can modify a facial animation behavior. The framework is composed of an emotion model for speech virtual...

Journal:
  • Cognitive Science

Volume 20, Issue -

Pages -

Publication date: 1996